governance structure
Financial Fraud Identification and Interpretability Study for Listed Companies Based on Convolutional Neural Network
Since the emergence of joint-stock companies, financial fraud by listed firms has repeatedly undermined capital markets. Fraud is difficult to detect because of covert tactics and the high labor and time costs of audits. Traditional statistical models are interpretable but struggle with nonlinear feature interactions, while machine learning models are powerful but often opaque. In addition, most existing methods judge fraud only for the current year based on current-year data, limiting timeliness. This paper proposes a financial fraud detection framework for Chinese A-share listed companies based on convolutional neural networks (CNNs). We design a feature engineering scheme that transforms firm-year panel data into image-like representations, enabling the CNN to capture cross-sectional and temporal patterns and to predict fraud in advance. Experiments show that the CNN outperforms logistic regression and LightGBM in accuracy, robustness, and early-warning performance, and that proper tuning of the classification threshold is crucial in high-risk settings. To address interpretability, we analyze the model along the dimensions of entity, feature, and time using local explanation techniques. We find that solvency, ratio structure, governance structure, and internal control are general predictors of fraud, while environmental indicators matter mainly in high-pollution industries. Non-fraud firms share stable feature patterns, whereas fraud firms exhibit heterogeneous patterns concentrated in short time windows. A case study of Guanong Shares in 2022 shows that cash flow analysis, social responsibility, governance structure, and per-share indicators are the main drivers of the model's fraud prediction, consistent with the company's documented misconduct.
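The abstract's core feature-engineering idea — turning a firm's panel data into an image-like matrix a CNN can read — can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the feature names, the year window, and the column-wise z-score normalization are all assumptions.

```python
import numpy as np

# Hypothetical sketch: arrange one firm's panel data as a (features x years)
# matrix, so rows are indicators and columns are consecutive fiscal years.
# A 2D CNN can then learn cross-sectional (row-wise) and temporal
# (column-wise) patterns from this single-channel "image".

def panel_to_image(records, feature_names, years):
    """records: dict mapping (feature, year) -> value; gaps become NaN."""
    img = np.full((len(feature_names), len(years)), np.nan)
    for i, f in enumerate(feature_names):
        for j, y in enumerate(years):
            img[i, j] = records.get((f, y), np.nan)
    # Z-score each year's cross-section so indicators on different scales
    # are comparable; missing cells end up as zero after scaling.
    col_mean = np.nanmean(img, axis=0, keepdims=True)
    col_std = np.nanstd(img, axis=0, keepdims=True) + 1e-8
    return np.nan_to_num((img - col_mean) / col_std)

# Illustrative indicators only (not the paper's feature set).
features = ["debt_ratio", "current_ratio", "roa", "board_independence"]
years = [2019, 2020, 2021]
records = {("debt_ratio", 2019): 0.6, ("debt_ratio", 2020): 0.7,
           ("current_ratio", 2020): 1.2, ("roa", 2021): 0.05}
image = panel_to_image(records, features, years)
print(image.shape)  # (4, 3): one channel, ready for a Conv2d input
```

Stacking such matrices for many firm-years yields a tensor of shape (N, 1, n_features, n_years), the standard single-channel input layout for a 2D convolutional network.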
An Adaptive Responsible AI Governance Framework for Decentralized Organizations
Meimandi, Kiana Jafari, Reuel, Anka, Aranguiz-Dias, Gabriela, Rahama, Hatim, Ayadi, Ala-Eddine, Boullier, Xavier, Verdo, Jérémy, Montanie, Louis, Kochenderfer, Mykel
This paper examines the assessment challenges of Responsible AI (RAI) governance efforts in globally decentralized organizations through a case study collaboration between a leading research university and a multinational enterprise. While there are many proposed frameworks for RAI, their application in complex organizational settings with distributed decision-making authority remains underexplored. Our RAI assessment, conducted across multiple business units and AI use cases, reveals four key patterns that shape RAI implementation: (1) complex interplay between group-level guidance and local interpretation, (2) challenges translating abstract principles into operational practices, (3) regional and functional variation in implementation approaches, and (4) inconsistent accountability in risk oversight. Based on these findings, we propose an Adaptive RAI Governance (ARGO) Framework that balances central coordination with local autonomy through three interdependent layers: shared foundation standards, central advisory resources, and contextual local implementation. We contribute insights from academic-industry collaboration for RAI assessments, highlighting the importance of modular governance approaches that accommodate organizational complexity while maintaining alignment with responsible AI principles. These lessons offer practical guidance for organizations navigating the transition from RAI principles to operational practice within decentralized structures.
- South America > Argentina > Patagonia > Río Negro Province > Viedma (0.04)
- North America > United States > California > Santa Clara County > Palo Alto (0.04)
- Europe > Italy (0.04)
- Asia > China (0.04)
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Government (1.00)
Oversight Structures for Agentic AI in Public-Sector Organizations
Schmitz, Chris, Rystrøm, Jonathan, Batzner, Jan
This paper finds that the introduction of agentic AI systems intensifies existing challenges to traditional public sector oversight mechanisms -- which rely on siloed compliance units and episodic approvals rather than continuous, integrated supervision. We identify five governance dimensions essential for responsible agent deployment: cross-departmental implementation, comprehensive evaluation, enhanced security protocols, operational visibility, and systematic auditing. We evaluate the capacity of existing oversight structures to meet these challenges via a mixed-methods approach consisting of a literature review and interviews with civil servants in AI-related roles. We find that agent oversight poses intensified versions of three existing governance challenges: continuous oversight, deeper integration of governance and operational capabilities, and interdepartmental coordination. We propose approaches that both adapt institutional structures and design agent oversight compatible with public sector constraints.
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.28)
- Asia > Thailand > Bangkok > Bangkok (0.04)
- South America > Brazil > Rio de Janeiro > Rio de Janeiro (0.04)
- (7 more...)
- Questionnaire & Opinion Survey (0.94)
- Personal > Interview (0.68)
- Research Report > New Finding (0.46)
- Information Technology > Security & Privacy (1.00)
- Government (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Agents (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.97)
- Information Technology > Artificial Intelligence > Machine Learning (0.68)
Environment Scan of Generative AI Infrastructure for Clinical and Translational Science
Idnay, Betina, Xu, Zihan, Adams, William G., Adibuzzaman, Mohammad, Anderson, Nicholas R., Bahroos, Neil, Bell, Douglas S., Bumgardner, Cody, Campion, Thomas, Castro, Mario, Cimino, James J., Cohen, I. Glenn, Dorr, David, Elkin, Peter L, Fan, Jungwei W., Ferris, Todd, Foran, David J., Hanauer, David, Hogarth, Mike, Huang, Kun, Kalpathy-Cramer, Jayashree, Kandpal, Manoj, Karnik, Niranjan S., Katoch, Avnish, Lai, Albert M., Lambert, Christophe G., Li, Lang, Lindsell, Christopher, Liu, Jinze, Lu, Zhiyong, Luo, Yuan, McGarvey, Peter, Mendonca, Eneida A., Mirhaji, Parsa, Murphy, Shawn, Osborne, John D., Paschalidis, Ioannis C., Harris, Paul A., Prior, Fred, Shaheen, Nicholas J., Shara, Nawar, Sim, Ida, Tachinardi, Umberto, Waitman, Lemuel R., Wright, Rosalind J., Zai, Adrian H., Zheng, Kai, Lee, Sandra Soo-Jin, Malin, Bradley A., Natarajan, Karthik, Price, W. Nicholson II, Zhang, Rui, Zhang, Yiye, Xu, Hua, Bian, Jiang, Weng, Chunhua, Peng, Yifan
This study reports a comprehensive environmental scan of the generative AI (GenAI) infrastructure in the national network for clinical and translational science across 36 institutions supported by the Clinical and Translational Science Award (CTSA) Program led by the National Center for Advancing Translational Sciences (NCATS) of the National Institutes of Health (NIH) in the United States. With the rapid advancement of GenAI technologies, including large language models (LLMs), healthcare institutions face unprecedented opportunities and challenges. This research explores the current status of GenAI integration, focusing on stakeholder roles, governance structures, and ethical considerations, by administering a survey to leaders of health institutions (i.e., representing academic medical centers and health systems) to assess institutional readiness and approaches toward GenAI adoption. Key findings indicate a diverse range of institutional strategies, with most organizations in the experimental phase of GenAI deployment. The study highlights significant variations in governance models, with a strong preference for centralized decision-making but notable gaps in workforce training and ethical oversight. Moreover, the results underscore the need for a more coordinated approach to GenAI governance, emphasizing collaboration among senior leaders, clinicians, information technology staff, and researchers. Our analysis also reveals concerns regarding GenAI bias, data security, and stakeholder trust, which must be addressed to ensure the ethical and effective implementation of GenAI technologies. This study offers valuable insights into the challenges and opportunities of GenAI integration in healthcare, providing a roadmap for institutions aiming to leverage GenAI for improved quality of care and operational efficiency.
- North America > United States > California > Los Angeles County > Los Angeles (0.28)
- North America > United States > California > San Francisco County > San Francisco (0.28)
- North America > United States > Minnesota > Hennepin County > Minneapolis (0.28)
- (39 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (1.00)
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (1.00)
Governing dual-use technologies: Case studies of international security agreements and lessons for AI governance
Wasil, Akash R., Barnett, Peter, Gerovitch, Michael, Hauksson, Roman, Reed, Tom, Miller, Jack William
International AI governance agreements and institutions may play an important role in reducing global security risks from advanced AI. To inform the design of such agreements and institutions, we conducted case studies of historical and contemporary international security agreements. We focused specifically on those arrangements around dual-use technologies, examining agreements in nuclear security, chemical weapons, biosecurity, and export controls. For each agreement, we examined four key areas: (a) purpose, (b) core powers, (c) governance structure, and (d) instances of non-compliance. From these case studies, we extracted lessons for the design of international AI agreements and governance institutions. We discuss the importance of robust verification methods, strategies for balancing power between nations, mechanisms for adapting to rapid technological change, approaches to managing trade-offs between transparency and security, incentives for participation, and effective enforcement mechanisms.
- Asia > Russia (1.00)
- Europe > Russia (0.32)
- Asia > Middle East > Iran (0.31)
- (15 more...)
- Overview (1.00)
- Research Report (0.82)
Challenges and Best Practices in Corporate AI Governance: Lessons from the Biopharmaceutical Industry
Mökander, Jakob, Sheth, Margi, Gersbro-Sundler, Mimmi, Blomgren, Peder, Floridi, Luciano
While the use of artificial intelligence (AI) systems promises to bring significant economic and social benefits, it is also coupled with ethical, legal, and technical challenges. Business leaders thus face the question of how to best reap the benefits of automation whilst managing the associated risks. As a first step, many companies have committed themselves to various sets of ethics principles aimed at guiding the design and use of AI systems. So far so good. But how can well-intentioned ethical principles be translated into effective practice? And what challenges await companies that attempt to operationalize AI governance? In this article, we address these questions by drawing on our first-hand experience of shaping and driving the roll-out of AI governance within AstraZeneca, a biopharmaceutical company. The examples we discuss highlight challenges that any organization attempting to operationalize AI governance will have to face. These include questions concerning how to define the material scope of AI governance, how to harmonize standards across decentralized organizations, and how to measure the impact of specific AI governance initiatives. By showcasing how AstraZeneca managed these operational questions, we hope to provide project managers, CIOs, AI practitioners, and data privacy officers responsible for designing and implementing AI governance frameworks within other organizations with generalizable best practices. In essence, companies seeking to operationalize AI governance are encouraged to build on existing policies and governance structures, use pragmatic and action-oriented terminology, focus on risk management in development and procurement, and empower employees through continuous education and change management.
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.14)
- Europe > Italy > Emilia-Romagna > Metropolitan City of Bologna > Bologna (0.04)
- North America > United States > New Jersey > Mercer County > Princeton (0.04)
- (2 more...)
- Information Technology > Security & Privacy (1.00)
- Health & Medicine > Therapeutic Area > Oncology (1.00)
- Health & Medicine > Pharmaceuticals & Biotechnology (1.00)
How Anthropic Designed Itself to Avoid OpenAI's Mistakes
Last Thanksgiving, Brian Israel found himself being asked the same question again and again. The general counsel at the AI lab Anthropic had been watching dumbfounded along with the rest of the tech world as, just two miles south of Anthropic's headquarters in San Francisco, its main competitor OpenAI seemed to be imploding. OpenAI's board had fired CEO Sam Altman, saying he had lost their confidence, in a move that seemed likely to tank the startup's $80 billion-plus valuation. The firing was only possible thanks to OpenAI's strange corporate structure, in which its directors have no fiduciary duty to increase profits for shareholders--a structure Altman himself had helped design so that OpenAI could build powerful AI insulated from perverse market incentives. To many, it appeared that plan had badly backfired.
- Asia > Middle East > Israel (0.30)
- North America > United States > California > San Francisco County > San Francisco (0.24)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (1.00)
OpenAI reinstates CEO Sam Altman to board after firing and rehiring
OpenAI is reinstating CEO Sam Altman to its board of directors and said it has "full confidence" in his leadership after an outside investigation into the turmoil that led the company to abruptly fire and rehire him in November. OpenAI said the investigation by the law firm WilmerHale concluded that Altman's ouster had been a "consequence of a breakdown in the relationship and loss of trust" between Altman and the prior board. The ChatGPT maker also said it has added three women to its board of directors: Sue Desmond-Hellmann, a former CEO of the Bill & Melinda Gates Foundation; Nicole Seligman, a former Sony general counsel; and Instacart CEO Fidji Simo. The actions are a way for the San Francisco-based artificial intelligence company to show investors and customers that it is trying to move past the internal conflicts that nearly destroyed it last year and made global headlines. "I'm pleased this whole thing is over," Altman told reporters Friday, adding that he's been disheartened to see people leaking information to try to "pit us against each other" and demoralize the team. At the same time, he said he's learned from the experience and apologized for a dispute with a former board member he could have handled "with more grace and care".
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (1.00)
OpenAI Will Add Microsoft as Board Observer, Plans Governance Changes
OpenAI said that Sam Altman was officially reinstated as chief executive officer and that it has a new initial board of directors, with Microsoft Corp. joining as a nonvoting observer. The announcement Wednesday, a blog post penned by Altman, comes two weeks after the CEO's shock firing from the artificial intelligence startup, followed by an operatic boardroom power struggle. OpenAI also said that Mira Murati -- who had been chief technology officer until Altman's ousting, when she was briefly named interim CEO -- is once again the company's CTO. OpenAI co-founder Greg Brockman will return as the company's president after he quit in protest over Altman's firing. Microsoft, the company's largest investor, had not held a position on the board before taking the observer role.
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (1.00)
The Mystery at the Heart of the OpenAI Chaos
More than three days after OpenAI was thrown into chaos by Sam Altman's sudden firing from his post as CEO, one big question remains unanswered: Why? Altman was removed by OpenAI's non-profit board through an unconventional governance structure that, as one of the company's cofounders, he helped to create. It gave a small group of individuals wholly independent of the ChatGPT maker's core operations the power to dismiss its leadership, in the name of ensuring humanity-first oversight of its AI technology. The board's brief and somewhat cryptic statement announcing Altman's departure said the directors had "concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities." Altman was replaced by CTO Mira Murati, who was appointed interim CEO. Greg Brockman, like Altman an OpenAI cofounder, was removed from his post as chair of the board and quit the company in solidarity with Altman several hours later. There have been many twists and turns since Friday, with Altman making a failed attempt to return as CEO, the board replacing Murati as interim CEO with Twitch cofounder Emmett Shear, Microsoft announcing it would hire Altman and Brockman, and almost every OpenAI employee threatening to quit unless Altman returned.
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (1.00)